Structured Light Encoding and Decoding Algorithm Based on Adjacency-Hopping de Bruijn Sequences
Zhenglin Liang, Bin Chen, and Shiqian Wu
Objective
The rapid advancement of modern information technology has driven the increasing maturity of three-dimensional (3D) shape measurement. At present, this technology is applied in biomedicine, cultural relic protection, human-machine interaction, and other fields. Structured light measurement is a prominent 3D measurement technology distinguished by its non-contact operation, high precision, and rapid speed, and it stands as one of the most extensively used and reliable 3D measurement technologies. The de Bruijn sequence, noted for the uniqueness of every fixed-length subsequence within the entire sequence, is widely employed in structured light coding. In discrete sequence coding, only one projection pattern coded by a de Bruijn sequence is required to measure the 3D information of an object, ensuring high measurement efficiency. In continuous phase-shifting coding, the de Bruijn sequence is applied to code the phase order to assist the phase unwrapping process. However, the presence of identical consecutive codes in a de Bruijn sequence makes it challenging to precisely determine fringe numbers and positions within uniform color areas in captured images. To solve this problem, this paper proposes a new type of de Bruijn sequence named the adjacency-hopping de Bruijn sequence. Such sequences guarantee that all neighboring codes are different while preserving the uniqueness of the subsequences. These two properties lay the foundation for accurate decoding and efficient matching. Meanwhile, an efficient and complete structured light coding and decoding process is devised by combining the adjacency-hopping de Bruijn sequence with the phase-shifting method to complete the 3D measurement task.
Methods
According to graph theory, a de Bruijn sequence can be generated by traversing an Eulerian tour on a de Bruijn graph.
In this paper, we redefine the vertex and edge sets of the de Bruijn graph to construct a specialized oriented graph. This oriented graph ensures that the adjacent codes of each vertex are different. As a result, a unique type of de Bruijn sequence called an adjacency-hopping sequence, in which all neighboring codes are guaranteed to be different, can be generated by traversing an Eulerian tour on the oriented graph. This specialized sequence is then employed to encode the phase orders of the phase-shifting fringes. Specifically, the phase-shifting images are embedded into the red channel, while the phase-order images encoded via the proposed adjacency-hopping sequence are embedded into the green and blue channels. In the decoding process, the color images captured by the camera are separated into channels to calculate the wrapped phase and decode the phase order, respectively. Subsequently, a hash lookup algorithm is utilized for sequence matching, facilitating the determination of the phase order. Ultimately, the 3D information is obtained.
Results and Discussions
Initially, a comparative experiment is devised to compare classic de Bruijn sequence-based coding approaches (e.g., the original de Bruijn sequence, the multi-slit de Bruijn sequence, and the recursive binary XOR sequence) with the proposed adjacency-hopping de Bruijn sequence coding method, showcasing the advantages of the proposed sequence in discrete coding. The experimental results illustrate that, similar to the improved de Bruijn sequence-based approaches (i.e., the multi-slit de Bruijn sequence and the recursive binary XOR sequence), the proposed method effectively addresses the fringe separation problem encountered with the original de Bruijn sequence. Furthermore, compared with the aforementioned improved methods, the proposed adjacency-hopping de Bruijn sequence coding method demonstrates higher matching efficiency and is more suitable for integration with phase-shifting measurements.
Subsequently, a series of practical measurement experiments is designed to further illustrate the processing flow of the proposed method and evaluate its performance in terms of stability, measurement efficiency, and accuracy. The experimental results demonstrate that the coding and decoding method presented in this paper exhibits good robustness in scenarios involving optical path occlusions. Hence, it can be applied to measure objects with complex surface structures. Moreover, the proposed coding and decoding method achieves measurement accuracy comparable to that of the selected phase-shifting approaches while significantly reducing the number of projected patterns, resulting in improved measurement efficiency.
Conclusions
We introduce a special de Bruijn sequence named the adjacency-hopping de Bruijn sequence. We theoretically prove the existence of such sequences and elucidate their generation method. The proposed sequence guarantees that all neighboring codes are different while preserving the uniqueness of subsequences. The sequence is then employed to encode phase orders, and a novel phase-shifting-based coding method is introduced. On the projection side, the proposed method significantly reduces the number of projected patterns, thereby improving projection efficiency. On the decoding side, each phase-order-coded fringe can be separated accurately while guaranteeing efficient matching. The experimental results demonstrate that, compared with the classical complementary Gray-code plus phase-shifting method and the multi-frequency heterodyne method, the proposed method achieves comparable measurement accuracy while reducing the number of projection patterns from 11 or 12 to 4.
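The graph-theoretic construction described above can be sketched in a few lines (a minimal illustration for alphabet size k = 3 and window length n = 3; function and variable names are our own, and the paper's formal construction may differ in details):

```python
from itertools import product

def adjacency_hopping_sequence(k=3, n=3):
    """Generate a sequence over alphabet {0..k-1} in which every two
    neighboring codes differ and every length-n window with distinct
    adjacent symbols appears exactly once, via an Eulerian tour on a
    modified de Bruijn graph (sketch of the paper's idea)."""
    def hopping(t):
        return all(t[i] != t[i + 1] for i in range(len(t) - 1))
    # Vertices: length-(n-1) windows whose adjacent symbols all differ;
    # each outgoing edge appends a symbol different from the last one.
    adj = {v: [b for b in range(k) if b != v[-1]]
           for v in product(range(k), repeat=n - 1) if hopping(v)}
    # Hierholzer's algorithm for an Eulerian circuit.
    start = next(iter(adj))
    stack, tour = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            b = adj[v].pop()
            stack.append(v[1:] + (b,))
        else:
            tour.append(stack.pop())
    tour.reverse()
    # Sequence: first vertex plus the last symbol of each later vertex.
    return list(tour[0]) + [v[-1] for v in tour[1:]]

seq = adjacency_hopping_sequence()
assert all(a != b for a, b in zip(seq, seq[1:]))      # adjacency-hopping
windows = [tuple(seq[i:i + 3]) for i in range(len(seq) - 2)]
assert len(windows) == len(set(windows))              # unique subsequences
```

Both properties claimed in the abstract hold by construction: each edge appends a symbol different from its predecessor, and each valid window corresponds to exactly one edge of the Eulerian tour.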
  • Apr. 25, 2024
  • Acta Optica Sinica
  • Vol. 44, Issue 8, 0812002 (2024)
  • DOI:10.3788/AOS231652
Small-Scale Rockfall Detection Method Based on Solid-State Lidar for Unstructured Transportation Roads in Open-Pit Mines
Qinghua Gu, Jiawei Li, Lu Chen, and Hejie Zhu
To address the challenges of real-time detection of small-size rockfalls in open-pit mines during ore transportation by unmanned mining trucks, caused by suboptimal road conditions, intense lighting, and heavy dust, this study proposes a method for detecting small-size rockfalls in open-pit mines based on solid-state lidar. The proposed method employs a double-echo lidar for data acquisition, effectively reducing dust interference, and extracts the driving area in front of the vehicle. Subsequently, a ground segmentation algorithm based on fan-shaped surfaces with straight-line fitting is employed to segment the rough, unstructured, and sloped terrain. Moreover, a hierarchical grid tree model known as an octree is introduced to enhance the efficiency of the neighborhood search. Furthermore, the two-color nearest pair method is applied to construct a graph and rapidly generate clusters. Finally, an adaptive clustering radius ε is adopted for clustering to obtain the bounding-box models of small-size rockfalls. The experimental results demonstrate that the proposed method outperforms the k-d tree-accelerated DBSCAN algorithm, increasing the positive detection rate by 9.61 percentage points and reducing the detection time by 379.77 ms.
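The adaptive clustering radius idea can be illustrated with a small sketch (the eps formula, parameter values, and the naive neighbor search below are assumptions; the paper's octree acceleration and two-color nearest pair graph construction are omitted):

```python
import numpy as np

def adaptive_eps(points, dtheta=np.radians(0.2), margin=0.05):
    """Hypothetical adaptive radius: for a lidar with angular resolution
    dtheta, the gap between neighboring returns grows roughly linearly
    with range r, so eps is scaled per point (not the paper's exact
    formula)."""
    r = np.linalg.norm(points, axis=1)
    return 2.0 * r * np.tan(dtheta / 2.0) + margin

def cluster(points):
    """Naive DBSCAN-style region growing with a per-point radius; the
    octree-accelerated neighbor search is replaced by a brute-force scan
    for clarity."""
    eps = adaptive_eps(points)
    labels = -np.ones(len(points), dtype=int)
    cid = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        frontier, labels[i] = [i], cid
        while frontier:
            j = frontier.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for m in np.nonzero((d <= eps[j]) & (labels == -1))[0]:
                labels[m] = cid
                frontier.append(m)
        cid += 1
    return labels
```

With a range-dependent eps, two returns 5 cm apart can belong to one rock at 20 m range yet be split into separate objects at close range, which is the motivation for adapting the radius to the lidar's angular sampling.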
  • Apr. 25, 2024
  • Laser & Optoelectronics Progress
  • Vol. 61, Issue 8, 0812006 (2024)
  • DOI:10.3788/LOP230765
Position and Pose Estimation of Rigid Body Based on Three-Dimensional Digital Image Correlation
Yonghong Wang, Wanlin Chen, Bingfei Hou, and Biao Wang
Objective
Position and pose are two basic parameters describing the location and attitude of an object in space, and they are extensively researched in robot grasping, automatic driving, and industrial inspection. Traditional attitude estimation methods, such as mechanical systems, laser trackers, and inertial units, have drawbacks including the need for contact measurement, susceptibility to interference from ambient light, and optical path complexity. As an optical measurement method, digital image correlation (DIC) features strong anti-interference ability, a simple optical path, and non-contact operation. It has been widely employed in the measurement of displacement, strain, and mechanical properties, but little research has been conducted on attitude measurement. At present, there is a position measurement system based on the DIC method that adopts the space vector method. This method requires the calculation of the inverse tangent function in rotation angle calculation, which introduces a large error and requires many calculation points. To overcome the shortcomings of the traditional position measurement system, we propose a position estimation system based on the three-dimensional digital image correlation (3D-DIC) method to measure multiple position parameters of a rigid body in space. Meanwhile, a new position solution method is put forward to address the weaknesses of the existing space vector method, and a new matching calculation method is also proposed to solve the problem of DIC in measuring large rotation angles.
Methods
The mathematical model of the position solution based on singular value decomposition (SVD) is first derived, and then the position measurement system is built for experiments. The specimen, which has been sprayed with a speckle pattern, is fixed on a moving platform and moves along with the platform.
After calibrating the binocular camera, image sequences before and after the specimen movement are captured, and 3D-DIC is employed to match the image sequences before and after the movement, thus obtaining the spatial three-dimensional coordinates of the calculation points. After obtaining the sets of 3D coordinates of the calculation points before and after the movement, the SVD method is adopted to solve the rotation matrix and translation vector, from which the position parameters of the specimen's movement are obtained. To address the large errors of 3D-DIC in measuring large rotational deformation, we propose a matching calculation method that adds intermediate images. The feasibility and accuracy of the proposed method are verified by translational degree-of-freedom and rotational degree-of-freedom experiments. Finally, a set of accuracy comparison experiments with the space vector method is conducted to verify the superiority of the proposed method.
Results and Discussions
After experimental validation, the position estimation system based on the proposed 3D-DIC method can measure multiple position parameters of a rigid body in space. The absolute errors of the three translational degrees of freedom in the transverse, longitudinal, and elevation directions are less than 0.07 mm (Fig. 6), and the absolute errors of the yaw and roll angles are less than 0.02° when the rotation angle is less than 10° (Figs. 7 and 9). Meanwhile, the proposed matching calculation method of adding intermediate images also reduces the error of large-angle measurement (Fig. 10).
The accuracy comparison experiments with the existing space vector method show that the proposed method has smaller measurement errors in rotation angle measurement and requires fewer calculation points (Table 2).
Conclusions
We establish a position estimation system based on the 3D digital image correlation method and propose a position solution method based on singular value decomposition. The 3D coordinates of the calculation points are obtained from the image sequences captured before and after the motion of the measured object and are used for the position solution, realizing the measurement of multiple position parameters of a spatial rigid body. The results of the three translational degree-of-freedom experiments validate that the proposed 3D-DIC-based position measurement system is suitable for measuring the spatial translational degrees of freedom of a rigid body. Additionally, the large-angle measurement experiments verify that the proposed improved matching calculation method, which adds intermediate images, offers an obvious improvement in large-angle measurements, and the results of yaw and roll angle measurements show that the measurement system is also applicable to rotational degree-of-freedom measurements at both small and large angles. Compared with the traditional position estimation system, our method features high accuracy, a simple optical path, and non-contact operation. Compared with the existing space vector method, our method has smaller measurement errors in both yaw and roll angles, and the required number of calculation points is also greatly reduced. In summary, the position and pose measurement system based on the 3D digital image correlation method is suitable for spatial rigid body position measurement with high accuracy, meeting the measurement requirements.
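The SVD-based position solution described above follows the standard least-squares rigid registration scheme; a minimal sketch (the function name and point layout are our own assumptions):

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t,
    from matched 3D point sets (one point per row), via SVD of the
    cross-covariance matrix of the centered point clouds."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper (reflection) solution.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Because the rotation comes directly from the SVD rather than from per-point inverse tangent evaluations, the solution uses all calculation points jointly, which is consistent with the abstract's claim of smaller angle errors with fewer points than the space vector method.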
  • Apr. 25, 2024
  • Acta Optica Sinica
  • Vol. 44, Issue 8, 0812005 (2024)
  • DOI:10.3788/AOS231608
Influence Evaluation of Systematic Errors and Prismatic Postures on Field of View in Monocular Stereo Vision
Tianyu Yuan, Xiangjun Dai, and Fujun Yang
Objective
Monocular stereo vision features low cost and compactness compared with binocular stereo vision and has broader application prospects in space-constrained environments. Stereo vision systems based on a dual biprism are widely employed in engineering measurement owing to their adjustable field of view (FoV). Compared with other types of monocular vision systems, this method is compact and easy to adjust. Topography reconstruction and deformation measurement are the main applications of monocular vision systems. The error factors in the imaging system should be considered and evaluated to obtain high-precision measurement results. The acquisition and reconstruction of depth information are crucial for accuracy. The depth equation derived from the optical geometry can be adopted to analyze the factors affecting reconstruction accuracy. Analyzing the influence of object distance and included angle on image depth information and disparity in the depth equation can provide references for system layout and optimization. Additionally, a manually placed dual biprism exhibits offset and rotation, and the errors caused by these postures will change the imaging model derived under ideal conditions. Therefore, model correction considering posture errors is important for high-precision imaging. Meanwhile, the dual-biprism posture will lead to FoV differences. A quantitative study of the FoV changes caused by the posture can be helpful for a reasonable arrangement of the system layout and object positions. Based on previous studies, to make the monocular stereo system composed of a dual biprism more applicable to high-precision topographic reconstruction and deformation measurement, we conduct an in-depth study on the influences of systematic errors and prismatic postures on the FoV.
Methods
The depth equation of the monocular vision system is expressed by geometrical optics and the ray tracing method.
By making a small-angle assumption and ignoring the distance between the dual biprism and the camera, a depth equation with parameters such as disparity, included angle, and object distance can be obtained, as demonstrated in Eq. (8). By taking the partial derivatives of the depth equation, the relationships among object distance, included angle, and disparity are obtained, as illustrated in Eqs. (9) and (10). The classification of prism postures is discussed, including rotation around the base point and offset along the x- or z-direction, as shown in Fig. 3. According to the systematic error introduced by the prism postures, the imaging model is further modified. Furthermore, the modified model is utilized to analyze the influence of prism postures on the FoV, as described in Eqs. (12)-(14). The experiments include verifying the validity of the derived depth equation by using the DIC results as ground truth, proving the correctness of the model by calculating the coordinates of the corner points, and investigating the FoV changes caused by the prismatic postures by matching the coordinates of corresponding points. First, an experiment varying the object distance is carried out. Then, with the object distance kept unchanged, the included angle of the prism is changed so as to evaluate the influences of the object distance and the included angle on the disparity, respectively. The DIC results are compared with the results of Eq. (8) to verify the correctness of the derivation. The dual biprism is offset according to the classifications, images are collected before and after the posture changes, and the pixel coordinates of the corners over the whole field are extracted by a corner recognition method. The corner coordinates and offset distances before the posture change are substituted into Eqs. (12)-(14). The calculated pixel coordinates are compared with the identified pixel coordinates to verify the correctness of the equation derivation.
Finally, the influence of postures on the FoV is determined by tracking the pixel coordinates of specific corners on the calibration plate before and after the prism posture changes.
Results and Discussions
The depth equation for the monocular stereo vision system can be described as Eq. (8). The influence of the parameters on the disparity can be obtained by taking the derivatives of the depth equation with respect to the object distance and the included angle, respectively. The derived equations can be expressed as Eqs. (9) and (10). The depth equation shows that the disparity decreases with increasing object distance and exhibits a nonlinear variation. As shown in Fig. 3, all three posture classifications cause a change in the standard virtual point model. The camera is calibrated after the device is placed to verify the validity of the derivation. As shown in Fig. 8, the FoV changes introduced by the postures can be obtained by tracking the corner points extracted on the calibration board before and after the posture variations. When the prism group rotates 1° clockwise around the base point, the FoV in each channel shifts anticlockwise, which also causes the FoV of the overall overlapping area to move in this direction. If only the right prism rotates 1° clockwise, the pixel coordinates of the virtual point in the right channel shift 57 pixels to the right, and the FoV offset of this side channel reduces the overlapping FoV of the system. When the prism group is offset to the right by 1 mm along the x-direction, the same trend is introduced. If only the right prism is offset, the virtual point is offset by 49 pixels to the right. Meanwhile, when the prism group moves 1.4 mm along the positive half-axis of the z-direction, there is no significant FoV change, as shown in Fig. 8(c). Speckle images before and after the object distance and angle changes are captured, and the disparity map is obtained. Then, the depth map can be computed using Eq.
(8), from which the depth information of each point in the overlapping FoV can be obtained. The profile of the measured object can then be obtained using the coordinate transformation method. The correctness of the derived equation is verified by selecting three cross-sections on the object and comparing the profiles obtained by the two methods, as shown in Figs. 9-11. The corrected models considering the prismatic postures are illustrated in Eqs. (12)-(14). The pixel coordinates of the corner points obtained before the posture change are substituted into Eqs. (12)-(14) to obtain the offset coordinates, which are compared with the pixel coordinates of the corner points extracted after the offset to verify the correctness of the corrected model, as shown in Fig. 12.
Conclusions
The relationship between the depth equation and disparity in prism-splitting monocular stereo vision systems is studied, with the systematic error introduced by the dual-biprism postures considered. The depth equation of the system is derived by combining the virtual point model and the ray tracing method. By taking the derivatives of the depth equation, the influence of object distance and included angle on disparity is studied. The results show that the disparity of the image increases as the object distance decreases and the included angle rises. The imaging model is modified to address the systematic errors introduced by the postures. The experimental results show that the pixel coordinates of the virtual points can be accurately calculated using the modified model with known offset distances of the dual biprism and the world coordinates of spatial points, which determines the mapping relationship of spatial points under different prism postures.
Finally, a rotation of the dual biprism, or an offset along the direction perpendicular to the camera's optical axis, changes the FoV of the system, whereas a posture change along the optical axis only reduces the imaging range. These findings provide references for high-precision reconstruction and deformation measurement with monocular stereo vision systems composed of optical elements.
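The qualitative trends reported above can be reproduced with a toy stand-in for the depth equation (the model below, including the (n-1)θ prism deviation and the fixed prism-to-camera distance L, is a hypothetical simplification and is not the paper's Eq. (8)):

```python
import numpy as np

def disparity(Z, theta, f=0.008, pitch=3.45e-6, n=1.5, L=0.1):
    """Toy disparity model for a biprism stereo system: each half-view is
    deviated by roughly (n-1)*theta, giving an effective virtual-camera
    baseline B, so disparity ~ f*B/(Z*pitch) in pixels. All parameter
    values here are illustrative assumptions."""
    B = 2.0 * L * np.tan((n - 1.0) * theta)   # assumed baseline model
    return f * B / (Z * pitch)

Z = np.linspace(0.3, 1.0, 50)                 # object distance in meters
d = disparity(Z, np.radians(10))
assert np.all(np.diff(d) < 0)                 # disparity falls with distance
assert disparity(0.5, np.radians(12)) > disparity(0.5, np.radians(10))
```

Even in this simplified form, the two trends stated in the conclusions hold: disparity decreases nonlinearly as the object distance grows and increases with the included angle of the prism.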
  • Apr. 25, 2024
  • Acta Optica Sinica
  • Vol. 44, Issue 8, 0812004 (2024)
  • DOI:10.3788/AOS231629
Application of Dual Plane-Mirror Dual-Camera Digital Image Correlation Technology in Three-Dimensional Reconstruction
Guihua Li, Ziwei Wang, Weiqing Sun, Pengxiang Ge, Haoyu Wang, and Mei Zhang
Objective
Digital image correlation (DIC) is a processing method commonly employed for image matching, and after nearly forty years of development, its accuracy, efficiency, and practicality have improved significantly. With the development and progress of science and technology, DIC technology for three-dimensional (3D) measurement should also be economical and practical, achieving a full range of functions with relatively simple devices. DIC measurement systems assisted by cameras and external equipment can realize multi-viewpoint and multi-directional measurements, on which many scholars have carried out thorough research. Among them, the single-camera system offers flexible field-of-view adjustment and a simple optical path, but features poor stability and low accuracy. The multi-camera system requires multiple cameras, and its calibration process is complicated. Although a multi-camera measurement system can improve the range and accuracy of 3D measurement, it is difficult to apply widely in 3D full-field measurement owing to its demanding environmental requirements and expensive cameras. Given the shortcomings of the existing single-camera and multi-camera systems, we propose a dual-camera 3D reconstruction method assisted by dual plane mirrors.
Methods
We put forward a visual 3D measurement method assisted by dual plane mirrors, which analyzes the correspondences between the real corner points and their virtual images in the mirrors to obtain the reflection transformation matrix. The virtual-to-real transformation of the object surface is then completed using the reflection transformation matrix, and 3D full-field measurement is finally realized.
Additionally, this method avoids spraying diffuse spots on the mirrors, which would take up the spatial resolution of the camera, making the solution of the reflection transformation matrix easy and efficient. Firstly, the image information of the front and back side surfaces of the object is acquired simultaneously by the dual-camera DIC measurement system (Fig. 1). Secondly, the calibration plate is placed in front of a plane mirror, and the dual cameras observe the real calibration plate and its virtual image in the mirror at the same time (Fig. 4). The midpoint method based on the common perpendicular line is adopted to determine the 3D coordinates of the corner points in space (Fig. 5), and the specific positions of the dual plane mirrors are finally determined by changing the position of the calibration plate several times. Finally, the reflection transformation matrix is solved from the mirror position equation, and the 3D reconstruction of the object is completed.
Results and Discussions
To verify the accuracy of the proposed method, we conduct static and dynamic experiments on the measured parts. In the static experiments, the 3D contour of a game coin is reconstructed, and in the dynamic experiments, the thermal deformation of a five-sided aluminum column is investigated (Fig. 6). With the proposed method, the dual-mirror equation and the reflection transformation matrix are obtained at a mirror angle of 108° (Table 1). The front and back contours of an ordinary game coin are reconstructed in three dimensions; the theoretical thickness of the game coin is 1.80 mm, and the measured thickness is around 1.75 mm (Fig. 9). For comparison, the reconstruction method of spraying diffuse spots on the mirror surface is also evaluated to verify the 3D reconstruction accuracy of the proposed method (Fig. 9), and the 3D reconstruction of the game coin by the proposed method is found to be better than that obtained by spraying diffuse spots on the mirror surface.
Meanwhile, the proposed method avoids taking up the spatial resolution of the camera by spraying diffuse spots on the mirror surface, and thus achieves higher accuracy. The 3D reconstruction and thermal deformation measurement results of the aluminum column are as follows. Firstly, the reconstruction results of the surface profile of the five-sided aluminum column are obtained by the proposed method (Fig. 10); the real height of the aluminum column is 70.00 mm±0.01 mm, and the average measured height is 70.0035 mm, indicating a good measurement result. Secondly, the average height change of the five outer surfaces of the aluminum column during thermal deformation is obtained (Table 2). The thermal deformation displacement cloud map of the outer surfaces is shown in Fig. 11, and the height change of different surfaces during the cooling process is illustrated in Fig. 12. To more intuitively demonstrate the accuracy of the proposed real-virtual transformation method, we compare and analyze the deviation values of the height change obtained by the two methods at each node (Fig. 13), which shows that the proposed method has higher measurement accuracy.
Conclusions
We propose a dual plane-mirror-assisted visual DIC 3D full-field measurement method. The static experiment results indicate that, for the 3D reconstruction of the game coin, the proposed method outperforms the reconstruction method of spraying diffuse spots on mirrors and achieves higher accuracy. The results of the dynamic thermal deformation experiments indicate that when the temperature of the five-sided aluminum column is reduced from 330 to 20 ℃, the height change of the outer surfaces of the column is basically consistent with the simulation results of the finite element software, and the deviation values of the height change measured by the proposed method are smaller than those of the method of spraying diffuse spots on mirrors.
Since the proposed method avoids spraying diffuse spots on mirrors, which would take up the spatial resolution of the camera, it features simple operation, high measurement accuracy, and promising application prospects.
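The real-virtual correspondence relies on the generic mirror-reflection transform; a minimal sketch (the function below is illustrative, and the paper's procedure of estimating the mirror plane itself from real/virtual calibration-corner pairs is not shown):

```python
import numpy as np

def reflection_matrix(normal, point_on_mirror):
    """Homogeneous 4x4 reflection across a plane mirror given its unit
    normal n and any point p on it. For the plane n·x + d = 0 with
    d = -n·p, the reflection is x' = (I - 2nnᵀ)x - 2dn."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = -n @ np.asarray(point_on_mirror, dtype=float)
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)   # linear part: Householder reflection
    M[:3, 3] = -2.0 * d * n             # translation part from plane offset
    return M
```

Applying this matrix to points reconstructed from the mirror channel maps the virtual back-surface points onto the real object surface; applying it twice returns the identity, which is a convenient sanity check when validating the solved mirror planes.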
  • Apr. 25, 2024
  • Acta Optica Sinica
  • Vol. 44, Issue 8, 0812003 (2024)
  • DOI:10.3788/AOS231877
Processing Method of Infrared Sequence Images Based on Long Pulse Thermal Excitation
Yanjie Wei, and Yao Xiao
Objective
Defects such as debonding, bulges, pores, pits, delaminations, and inclusions in composites are common during manufacture and service. They not only reduce strength and stiffness but can also cause structural failure. Reliable non-destructive testing methods are required to assess the quality of composite materials. Long pulse thermography (LPT) is a full-field, non-contact, non-destructive testing method based on image visualization that provides an efficient way to assess defects. However, the defect visibility of LPT can be compromised by various factors such as experimental conditions, heating intensity, inherent material properties, and noise. The effectiveness of LPT is constrained by fuzzy edges and low-contrast defects. Consequently, enhancing defect visibility via signal processing methods is crucial for inspecting defects in composite materials using LPT. Thus, we propose an infrared image sequence processing method that utilizes the Fourier transform, phase integration, and edge-preserving filters to enhance the quality of LPT detection results for composite materials. Meanwhile, a few latent variables that better reflect the defect information inside the specimen are derived by transforming the surface temperature information during the cooling period. These variables can eliminate the influence of uneven heating and improve defect visualization. This method enables clear delineation of defect edges and accurate measurement of defect sizes. Our approach and findings are expected to contribute to qualitative and quantitative measurements in the non-destructive testing of composite structures.
Methods
We propose a novel infrared image sequence processing algorithm to enhance the defect visibility of LPT. This approach comprises four steps: background uniformity processing, phase extraction, frequency-domain integration, and image quantization.
Initially, thermal data are acquired after a square-pulse heating period and pre-processed to eliminate the inhomogeneity of the initial temperature distribution. Subsequently, Fourier phase analysis is conducted to extract the phase information related to defects of varying depths and sizes. Next, the phase difference between defective and sound regions is integrated pixel-wise along the frequency axis to consolidate the defect information into a new image. Lastly, the integrated phase image is transformed into an 8-bit visual image by applying edge-preserving filters and local adaptive Gamma correction.
Results and Discussions
To evaluate the effectiveness of the proposed method, we conduct an experiment on a glass fiber reinforced polymer (GFRP) panel and compare the method with various thermal signal processing methods. The efficacy of the proposed method is substantiated via qualitative and quantitative analysis, and the influence of acquisition parameters is additionally discussed. Figure 7 illustrates the raw infrared images captured at different instants. The defects at greater depths have low contrast and fuzzy edges. The phase images processed by background uniformity correction and the Fourier transform are depicted in Figs. 9(a)-9(c). The visibility of defects in these phase images is improved compared with the raw images. However, the deeper defects are more obvious in the phase images at low frequencies, and vice versa. It is challenging to identify all defects at various depths using phase images at only a single frequency. To this end, the frequency-domain integration method is utilized to amalgamate the phase information of all defects, and the resulting phase integration image is subsequently enhanced and quantized. The processed results are presented in Fig. 9(d), where all 20 defects of various depths and sizes are distinguishable.
The edges of the defects are visible, which facilitates subsequent image segmentation and edge extraction for accurate defect size measurement. Additionally, three traditional thermal signal processing algorithms, namely absolute thermal contrast (ATC), thermographic signal reconstruction (TSR), and principal component analysis (PCA), are also compared. Figures 11 and 12 highlight the superiority of the proposed method from qualitative and quantitative perspectives, respectively. Analyzing the variations in the temperature difference over time and the signal-to-noise ratio across various sampling frequencies (Fig. 13) allows for determining the optimal acquisition time of 30 s and sampling frequency of 30 Hz, striking a balance between computational efficiency and detection effectiveness.
Conclusions
We employ a homemade infrared non-destructive testing system utilizing LPT for the experiments. A method for processing infrared image sequences based on the Fourier transform, phase integration, and edge-preserving filters is developed to mitigate the influence of uneven heating and enhance defect contrast. The inspection results on the GFRP panel demonstrate that phase signals can offer more defect information, and integrating phase information across all frequencies significantly enhances detection performance compared with a phase image at a single fixed frequency. Meanwhile, the accurate defect size measurement in segmented images further validates the reliability of the proposed method. An important advantage of this method is that fewer parameters need to be determined, specifically the optimum sampling time and frame rate. Other data dimensionality reduction techniques such as ATC, TSR, or PCA yield multiple principal component images that require human visual interpretation. In contrast, the proposed method generates a single optimal detection image, which significantly improves the automation of detection.
Finally, our study provides guidance for practical non-destructive inspection of composite structures.
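The frequency-domain phase integration step described above can be illustrated with a minimal numpy sketch: the phase of the temporal FFT is compared against a sound-region reference at each frequency, the contrast is summed along frequency, and the result is scaled to 8 bits. The function name, the number of frequency bins, and the normalization are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def phase_integration_image(frames, sound_mask, n_freqs=10):
    """Integrate FFT phase contrast across frequencies (illustrative sketch).

    frames: (T, H, W) thermal sequence after background correction.
    sound_mask: boolean (H, W) mask marking a defect-free (sound) region.
    """
    # Phase of the temporal FFT at each pixel for the first n_freqs bins (skip DC)
    spec = np.fft.fft(frames, axis=0)[1:n_freqs + 1]
    phase = np.angle(spec)                                   # (n_freqs, H, W)
    # Reference phase: mean over the sound region at each frequency
    ref = phase[:, sound_mask].mean(axis=1)[:, None, None]
    # Pixel-wise integration (sum) of the phase contrast along frequency
    integrated = np.abs(phase - ref).sum(axis=0)             # (H, W)
    # Normalize to an 8-bit visual image
    lo, hi = integrated.min(), integrated.max()
    return ((integrated - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
```

In the paper this integrated image is further refined with edge-preserving filtering and local adaptive Gamma correction before segmentation.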
  • Apr. 25, 2024
  • Acta Optica Sinica
  • Vol. 44, Issue 8, 0812001 (2024)
  • DOI:10.3788/AOS231805
Phase Extraction Method for Single Interferogram Based on Light Intensity Iteration
Xiangyu Zhang, Ailing Tian, Zhiqiang Liu, Hongjun Wang, Bingcai Liu, and Xueliang Zhu
Objective
Precision optical components are widely employed in various optical systems, and the surface shape quality of optical components directly affects the performance of optical devices. Therefore, surface shape detection of optical components is of great significance. Interferometry is widely recognized as the most effective method for surface shape detection, among which phase-shifting interferometry offers higher detection accuracy. However, the continuous collection of multiple phase-shifted interferograms is constrained by the performance of the phase-shifting components and is easily affected by objective factors such as mechanical vibration and air disturbance, which decrease the detection accuracy. Therefore, it is not suitable for on-site production testing. In recent years, researchers have proposed methods that combine carrier interferometry with Fourier analysis to extract the phase from a single interferogram. However, these methods still suffer from shortcomings such as large edge errors along the tilt direction, fringe stacking, and low recovery accuracy. To solve the problem of low accuracy in single-interferogram phase extraction, we propose a new phase extraction method based on light intensity iteration. Simulations and experiments are conducted, and the stability of the algorithm is analyzed.

Methods
We adopt a combination of simulation and experiment, analyze the principle of the light intensity iteration method, and implement the algorithm in MATLAB for simulation verification. The feasibility, stability, and noise resistance of the algorithm are explored via simulations to ensure its performance.
By conducting 100 sets of simulations, the final phase residuals are compared, and the convergence conditions suitable for solving single interference fringe patterns and the solution interval with the best measurement performance are obtained. To verify the innovation and optimization ability of the algorithm, we compare it with the Fourier transform method. Finally, multiple experiments are carried out using the ZYGO-Verifire PE phase-shifting interferometer to measure optical components. The experiments are conducted in an environment with a temperature of 23 ℃ and an air humidity of 75.3%. A single interference fringe pattern is collected, and the phase is solved using the proposed algorithm. The results are compared, and the effectiveness of the algorithm is evaluated by the residual PV and RMS values.

Results and Discussions
Our algorithm improves detection accuracy while ensuring stability. The Bernsen algorithm is adopted to binarize the original interferogram and obtain a stepped predicted phase (Fig. 3), which provides initial information for the subsequent light intensity iterations. Using binarization to predict the phase offers a new approach for iterative methods. The feasibility and noise resistance of the method are demonstrated by comparison with the Fourier transform method (Fig. 4). The proposed method achieves higher solving accuracy and faster solving speed than the Fourier transform method, while its noise resistance is comparable; both methods resist noise well. Through hundreds of simulation experiments, convergence conditions that do not affect computational efficiency and avoid excessive iterations are obtained.
The study of the effect of the fringe number on the solution accuracy shows that the algorithm residual generally first decreases and then increases with a rising number of fringes (Fig. 8). Data comparison shows that the algorithm achieves the highest solution accuracy when processing a single interference fringe pattern with 4 to 5 fringes.

Conclusions
We propose a phase solution method based on light intensity iteration. Firstly, the original interferogram is binarized, and the initial phase is obtained by phase unwrapping. Then, the background light and modulated light are preliminarily estimated by the least squares method using the interference intensity expression, and the measured phase is calculated from a rearrangement of that expression. The measured phase is compared with the initial phase as the convergence criterion. If the accuracy requirement is not met, the initial phase is replaced with the newly solved phase, the background light and modulated light are updated, and the phase solution process is repeated. Through this light intensity iteration, the phase is extracted from a single interferogram. Solution accuracy, noise resistance, and algorithm stability are analyzed by simulation. Experimental measurements are conducted on a 100 mm planar element, and the results show that the phase distribution obtained by the proposed method is consistent with that obtained by the four-step phase-shifting algorithm of the ZYGO-Verifire PE phase-shifting interferometer. Relative to the interferometer results, the residual PV and RMS values of the light intensity iteration method are 2.49 nm and 0.35 nm, respectively. This indicates that the proposed method, featuring high stability and efficiency, can extract the phase distribution from a single fringe pattern and meet the testing needs of the production site environment.
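The iterative loop sketched in the conclusion can be illustrated with a toy numpy example, assuming the single-fringe intensity model I = a + b·cos(φ) with a global scalar background a and modulation b (the actual MATLAB implementation is pixel-wise and more elaborate; names and the sign-restoration step are illustrative assumptions):

```python
import numpy as np

def intensity_iteration(I, phi0, n_iter=50, tol=1e-6):
    """Single-interferogram phase retrieval by light-intensity iteration (sketch).

    I:    measured intensity, assumed to follow I = a + b*cos(phi).
    phi0: initial phase, e.g. from binarization and phase unwrapping.
    """
    phi = phi0.astype(float).copy()
    a = b = 0.0
    for _ in range(n_iter):
        # Least-squares estimate of background a and modulation b from
        # I ≈ a + b*cos(phi) with the current phase guess
        c = np.cos(phi).ravel()
        A = np.column_stack([np.ones_like(c), c])
        (a, b), *_ = np.linalg.lstsq(A, I.ravel(), rcond=None)
        # Re-solve the phase from the intensity expression; the sign of the
        # previous guess restores the arccos branch
        new_phi = (np.arccos(np.clip((I - a) / b, -1.0, 1.0))
                   * np.sign(np.sin(phi) + 1e-12))
        if np.max(np.abs(new_phi - phi)) < tol:   # convergence criterion
            phi = new_phi
            break
        phi = new_phi
    return phi, a, b
```

With a good initial phase from the binarization step, the loop recovers the background, modulation, and phase consistently with the intensity model.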
  • Apr. 10, 2024
  • Acta Optica Sinica
  • Vol. 44, Issue 7, 0712002 (2024)
  • DOI:10.3788/AOS231891
Adaptive Coding Fringe Projection Profilometry on Color Reflective Surfaces
Ying Wang, Yubo Ni, Zhaozong Meng, Nan Gao, Tong Guo, Zeqing Yang, Guofeng Zhang, Wei Yin, Hongwei Zhao, and Zonghua Zhang
Objective
Fringe projection profilometry is widely employed to reconstruct the three-dimensional (3D) shape of an object surface. However, when this method is used to measure objects with colored reflective surfaces, the image captured by the camera contains oversaturated pixels due to ambient lighting and reflections of the projected fringes, which makes the reflective areas unmeasurable. This problem mainly stems from the unevenly varying reflectivity of surfaces, which is affected by both roughness and surface color. To eliminate the interference of the surface color and complete 3D shape measurement on colored highly reflective surfaces, we propose a method that adaptively generates complementary color sinusoidal fringes. Exploiting the different absorption of colors by the measured surface, light of the complementary color is projected onto the highly reflective area to reduce the reflectivity of that region and suppress overexposure.

Methods
We put forward a method to measure the 3D shape of colored objects with high reflectivity based on adaptively encoded complementary color fringes. Firstly, the highly reflective region of the object to be measured is located. The image of the object surface is captured by the camera while the projector projects the strongest white light, and the coordinates of the oversaturated pixels are extracted by an inverse projection technique. The location of the highly reflective region in the coordinate system of the projected image is obtained via the matching relationship between the projector and the camera. Then, the optimal color for projecting onto the highly reflective region is calculated from the color image of the object surface captured by the camera.
The projection color obtained in the previous step is employed to generate an image projected onto the highly reflective region of the measured surface. The saturation of the projection color is adjusted according to the magnitude of the adjacent light intensity values at either end of the encoded color boundary until the difference between them is less than 20. Finally, sinusoidal fringes are encoded on the V component of the HSV color space, and adaptive complementary color sinusoidal fringe patterns are generated and projected onto the measured surface. The complete 3D shape of the surface is recovered by solving the unwrapped phase.

Results and Discussions
The proposed method employs adaptively encoded complementary color fringes. It reduces the reflectivity of the highly reflective surface region, avoids the unwrapped phase loss of traditional fringe projection profilometry, and finally obtains the complete 3D shape of a yellow ceramic cup (Fig. 5). Additionally, the phase recovery results for the yellow ceramic cup obtained with traditional gray fringes and with the proposed complementary color-coded fringes are compared under different exposure times. The results show that when the exposure time is greater than 40 ms, the phase recovery completeness of region D is maintained at 100% (Fig. 6) with the proposed method. The complete 3D shape of the surface of a colored highly reflective object is thus measured by projecting only one set of adaptively encoded complementary color sinusoidal fringe patterns. Meanwhile, the mean error of the proposed method is 0.5281 mm, smaller than that of the traditional multiple exposure method.
In conclusion, this method is not only more efficient than the traditional multiple exposure method but also improves the measurement accuracy.

Conclusions
To address the challenges in measuring the 3D shape of colored highly reflective objects, we propose a novel fringe projection profilometry method based on adaptive color encoding. Based on the theory of photometric complementarity, the proposed method encodes fringe structured light complementary to the measured surface color in the HSV color space and projects it onto the highly reflective region. As a result, it reduces the surface reflectivity of the highly reflective region and achieves 3D shape measurement of colored highly reflective objects. The experimental results show that this method reduces the number of projected images compared with traditional multiple exposure methods: only one set of adaptively encoded complementary color sinusoidal fringe patterns needs to be projected to obtain the complete 3D shape of the surface of a colored highly reflective object. The proposed method thus shows clear advantages in measurement efficiency and accuracy.
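The core idea of complementary color encoding in HSV can be shown in a small sketch: the surface hue is shifted by half the color circle, and the sinusoid is written onto the V channel. The function name and parameters are illustrative assumptions; the paper's method additionally adapts the saturation to the boundary intensity difference:

```python
import colorsys
import numpy as np

def complementary_fringe(surface_rgb, width, n_fringes=16, saturation=1.0):
    """Generate one row of a fringe pattern whose color is complementary to
    the given surface color, with the sinusoid on the V channel of HSV.
    Illustrative sketch, not the paper's implementation.
    """
    r, g, b = surface_rgb
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    comp_h = (h + 0.5) % 1.0                      # complementary hue
    x = np.arange(width)
    # Sinusoidal fringe encoded on the V (value) component
    V = 0.5 + 0.5 * np.cos(2 * np.pi * n_fringes * x / width)
    rows = [colorsys.hsv_to_rgb(comp_h, saturation, val) for val in V]
    return np.array(rows)                          # (width, 3) RGB in [0, 1]
```

For a yellow surface such as the ceramic cup in the experiment, the complementary hue is blue, which the surface absorbs more strongly, lowering the effective reflectivity of the highlighted region.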
  • Apr. 10, 2024
  • Acta Optica Sinica
  • Vol. 44, Issue 7, 0712001 (2024)
  • DOI:10.3788/AOS231894
Laser Radar 3D Target Detection Based on Improved PointPillars
Feng Tian, Chao Liu, Fang Liu, Wenwen Jiang, Xin Xu, and Ling Zhao
A 3D object detection method based on an improved PointPillars model is proposed to address the poor performance of current point-cloud-based 3D object detection algorithms on small objects. First, the pillar feature network in the PointPillars model is improved, and a new pillar encoding module is proposed. Average pooling and attention pooling are introduced into the encoding network to fully exploit the local detailed geometric information of each pillar, which improves the feature representation ability of each pillar module and further improves the detection performance of the model on small targets. Second, based on ConvNeXt, the 2D convolution downsampling module in the backbone network is improved to enable the model to extract rich contextual semantic information and global features during feature extraction, thus enhancing the feature extraction ability of the algorithm. Experimental results on the public KITTI dataset show that the proposed method achieves higher detection accuracy. Compared with the original network, the improved algorithm raises the average detection accuracy by 3.63 percentage points, proving the effectiveness of the method.
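The idea of combining average pooling and attention pooling over the points of one pillar can be sketched in numpy; the weight shapes, the softmax attention form, and the concatenation of the two pooled descriptors are illustrative assumptions, whereas the actual module is a learned network inside the model:

```python
import numpy as np

def pillar_encode(points, w_att, w_feat):
    """Encode one pillar's points with average pooling plus a simple
    attention pooling (illustrative sketch of the enhanced pillar encoder).

    points: (N, C) point features inside one pillar.
    w_att:  (C,) attention scoring weights.
    w_feat: (C, D) per-point feature projection.
    """
    feats = points @ w_feat                       # (N, D) projected features
    # Average pooling over the pillar's points
    avg = feats.mean(axis=0)                      # (D,)
    # Attention pooling: softmax-weighted sum over points
    scores = points @ w_att                       # (N,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    att = weights @ feats                         # (D,)
    # Concatenate the two pooled descriptors as the pillar feature
    return np.concatenate([avg, att])             # (2*D,)
```

Average pooling preserves the overall geometry of the pillar, while attention pooling lets informative points dominate, which is what helps small targets with few points.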
  • Apr. 25, 2024
  • Laser & Optoelectronics Progress
  • Vol. 61, Issue 8, 0812007 (2024)
  • DOI:10.3788/LOP231493
Multiscale Deformation Monitoring Based on Terrestrial 3D Laser Scanning Technology
Xiantao Guo, Lijun Yang, and Ya Kang
To address the inability of traditional deformation monitoring, which relies on an overall deformation model, to effectively capture the deformation of locally distinctive monitoring objects, this paper proposes a three-layer mixed deformation model of block, region, and overall deformation based on terrestrial 3D laser scanning technology. A block-based deformation calculation method is also designed. This method mainly comprises object segmentation, deformation estimation, and deformation fusion, and it can automatically extract deformation information at different scales without prior monitoring information. Simulation results show that with this method, the mean angle change estimation error of RANSAC plane fitting regression is 1.21″, and the estimation reliability increases with block size within a certain range. The results of a landslide experiment show that the minimum value method has less displacement estimation noise, and a 0.2 m block size segmentation can provide further deformation estimation details. The proposed method is particularly suitable for monitoring scenes with nonuniform deformation characteristics and has theoretical and practical significance for promoting the transformation of disaster monitoring from "point monitoring" to "surface monitoring" for landslides and other disasters that are difficult for personnel to reach.
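The block-wise angle change estimation via RANSAC plane fitting can be sketched as follows: a plane is fitted to a block's points in each epoch, and the angle between the two fitted normals gives the block's angular deformation. Thresholds, iteration counts, and function names are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fit_plane_ransac(pts, n_iter=200, thresh=0.01, seed=0):
    """RANSAC plane fit over a block's points; returns a unit normal
    (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_n = 0, None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)            # candidate plane normal
        if np.linalg.norm(n) < 1e-12:             # degenerate (collinear) sample
            continue
        n = n / np.linalg.norm(n)
        d = np.abs((pts - p0) @ n)                # point-to-plane distances
        inliers = (d < thresh).sum()
        if inliers > best_inliers:
            best_inliers, best_n = inliers, n
    return best_n

def angle_change_arcsec(pts_before, pts_after):
    """Angle between the fitted plane normals of two epochs, in arcseconds."""
    n1, n2 = fit_plane_ransac(pts_before), fit_plane_ransac(pts_after)
    cosang = np.clip(abs(n1 @ n2), 0.0, 1.0)      # sign-invariant normals
    return np.degrees(np.arccos(cosang)) * 3600.0
```

Fusing such per-block estimates with region- and overall-level estimates then yields the multiscale deformation description proposed in the paper.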
  • Apr. 25, 2024
  • Laser & Optoelectronics Progress
  • Vol. 61, Issue 8, 0812009 (2024)
  • DOI:10.3788/LOP232304